Regularized Kernel Recursive Least Square Algorithm

Author

  • Songlin Zhao
Abstract

In most adaptive signal processing applications, system linearity is assumed and adaptive linear filters are thus used. The traditional class of supervised adaptive filters relies on error-correction learning for its adaptive capability. The kernel method is a powerful nonparametric modeling tool for pattern analysis and statistical signal processing. Through a nonlinear mapping, kernel methods transform the data into a set of points in a Reproducing Kernel Hilbert Space (RKHS). The kernel recursive least squares (KRLS) algorithm achieves high accuracy and a fast convergence rate in stationary scenarios; however, this good performance is obtained at the cost of high computational complexity. Sparsification in kernel methods is known to reduce computational complexity and memory consumption.

1 Linear and Nonlinear Adaptive Filters

The term filter usually refers to a system that is designed to extract information about a prescribed quantity of interest from noisy data. An adaptive filter is a filter that self-adjusts its input-output mapping according to an optimization algorithm, such as predictive coding [2], mostly driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. With the processing capabilities of current digital signal processors, adaptive filters have become much more popular and are now widely used in fields such as sound-wave-based communication devices [23], face extraction [3, 26], camcorders and digital cameras, information retrieval [25], and medical monitoring equipment.

1.1 Linear Adaptive Filter

In most adaptive signal processing applications, system linearity is assumed and adaptive linear filters are thus used. The traditional class of supervised adaptive filters relies on error-correction learning for its adaptive capability. The error is therefore a necessary element of the cost function, which serves as the criterion for optimum performance of the filter. The linear filter includes a set of adaptively adjustable parameters (also known as weights), denoted ω(n−1), where n denotes discrete time. The input signal u(n) applied to the filter at time n produces the actual response

y(n) = ω(n−1)u(n).

This actual response is then compared with the corresponding desired response d(n) to produce the error signal e(n). The error signal, in turn, acts as a guide to adjust the weights ω(n−1) by an incremental value denoted Δω(n), so that on the next iteration ω(n) becomes the latest value of the weights to be updated. The adaptive filtering process is repeated continuously until the filter reaches a stopping condition, normally that the weight adjustment is small enough. An important issue in adaptive design, whether the filter is linear or nonlinear, is to ensure that the learning curve converges as the number of iterations increases. Under this condition, we say the system is in a steady state. (A minimal code sketch of this error-correction loop is given after Section 1.2.)

1.2 Nonlinear Adaptive Filter

Even though linear adaptive filtering can approximate nonlinearity to some extent, the performance of adaptive linear filters is not satisfactory in applications where nonlinearities are significant. Hence, more advanced nonlinear models are required.
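To make the error-correction loop of Section 1.1 concrete, the following is a minimal sketch of a least-mean-square (LMS) adaptive linear filter in Python/NumPy. The step size mu, the filter order, and the toy system-identification task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lms_filter(u, d, order=4, mu=0.05):
    """Minimal LMS sketch: y(n) = w(n-1)^T u(n), e(n) = d(n) - y(n),
    w(n) = w(n-1) + mu * e(n) * u(n). Parameters are illustrative."""
    w = np.zeros(order)                       # weights w(n-1)
    e = np.zeros(len(u))                      # error signal e(n)
    for n in range(order - 1, len(u)):
        u_n = u[n - order + 1 : n + 1][::-1]  # input vector u(n)
        y = w @ u_n                           # actual response y(n)
        e[n] = d[n] - y                       # error e(n) = d(n) - y(n)
        w = w + mu * e[n] * u_n               # incremental update Δw(n)
    return w, e

# Toy usage: identify a known FIR system from noisy observations.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
true_w = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(u, true_w)[: len(u)] + 0.01 * rng.standard_normal(2000)
w_hat, e = lms_filter(u, d)   # w_hat should approach true_w
```

The learning curve in this setting is the decay of e(n)²; the steady state described above corresponds to this curve flattening out as the iterations proceed.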


Similar resources

Square Root Extended Kernel Recursive Least Squares Algorithm for Nonlinear Channel Equalization

This study presents a square root version of the extended kernel recursive least squares algorithm. The main idea is to overcome the divergence phenomenon that arises in the computation of the weights of the extended kernel recursive least squares algorithm. Numerically stable Givens orthogonal transformations are used to obtain the next iteration of the algorithm. The usefulness of the proposed algorith...
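The summary above gives no implementation details, so the following is only a generic sketch of the elementary Givens-rotation step it refers to: an orthogonal transformation chosen to zero one matrix entry. Square-root algorithms apply sequences of such rotations to propagate a triangular factor of the covariance matrix instead of the covariance itself, which helps preserve positive definiteness.

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)   # numerically stable sqrt(a**2 + b**2)
    return a / r, b / r

# Toy usage: annihilate the subdiagonal entry of a 2x2 block,
# the elementary step of a QR-style square-root update.
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s], [-s, c]])
R = G @ A   # R[1, 0] is now zero; R[0, 0] equals np.hypot(3.0, 4.0) == 5.0
```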


Learning Rates of Least-Square Regularized Regression

This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and the capacity of the reproducing kernel Hilbert...


Asymptotics of Gaussian Regularized Least Squares

We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth σ → ∞ while letting the regularization parameter λ → 0, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of 1/σ² and λ: if λ = σ^{−(2k+1)}, then, as σ → ∞, the RLS solution tends to the kth-order polynomial with minimal empirical error. We...


Indoor Localization via Discriminatively Regularized Least Square Classification

In this paper, we address the received signal strength (RSS)-based indoor localization problem in a wireless local area network (WLAN) environment and formulate it as a multi-class classification problem using survey locations as classes. We present a discriminatively regularized least square classifier (DRLSC)-based localization algorithm that is aimed at making use of the class label informat...


Kernel Least Mean Square Algorithm

A simple yet powerful learning method, called the KLMS, is presented by combining the famed kernel trick and the least-mean-square (LMS) algorithm. General properties of the KLMS algorithm are demonstrated regarding its well-posedness in very high-dimensional spaces using Tikhonov regularization theory. An experiment is studied to support our conclusion that the KLMS algorithm can be readily u...
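As a rough sketch of the KLMS idea summarized above (the Gaussian kernel width and step size below are illustrative assumptions, not values from the paper): the LMS update is carried out in the RKHS induced by a kernel, so the learned function becomes a growing kernel expansion over past inputs.

```python
import numpy as np

class KLMS:
    """Minimal KLMS sketch with a Gaussian kernel:
    f(u) = sum_i eta * e_i * k(u_i, u)."""

    def __init__(self, eta=0.5, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.coeffs = [], []   # past inputs and their weights

    def _kernel(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, u):
        u = np.atleast_1d(u)
        return sum(c * self._kernel(x, u)
                   for x, c in zip(self.centers, self.coeffs))

    def update(self, u, d):
        u = np.atleast_1d(u)
        e = d - self.predict(u)              # a-priori error e(n)
        self.centers.append(u)               # each sample adds one center...
        self.coeffs.append(self.eta * e)     # ...weighted by eta * e(n)
        return e
```

Without sparsification the expansion grows by one term per sample, which is exactly the memory and complexity cost that the sparsification techniques mentioned in the abstract are meant to reduce.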



Journal title:
  • CoRR

Volume abs/1508.07103  Issue

Pages  -

Publication date 2015